
    Bringing tabletop technologies to kindergarten children

    Taking computer technology away from the desktop and into a more physical, manipulative space is known to provide many benefits and is generally considered to result in a system that is easier to learn and more natural to use. This paper describes a design solution that allows kindergarten children to benefit from the new pedagogical possibilities that tangible interaction and tabletop technologies offer for manipulative learning. After analysing children's cognitive and psychomotor skills, we designed and tuned a prototype game suitable for children aged 3 to 4 years. Our prototype uniquely combines low-cost tangible interaction and tabletop technology with tutored learning. The design was based on the observation of children using the technology, letting them play freely with the application during three play sessions. These observational sessions informed the design decisions for the game whilst also confirming the children's enjoyment of the prototype.

    Facial Emotional Classifier For Natural Interaction

    The recognition of emotional information is a key step toward giving computers the ability to interact more naturally and intelligently with people. We present a simple and computationally feasible method to perform automatic emotional classification of facial expressions. We propose the use of a set of characteristic facial points (a subset of the MPEG-4 feature points) to extract relevant emotional information: essentially five distances, the presence of wrinkles in the eyebrow region, and the mouth shape. The method defines and detects the six basic emotions (plus the neutral one) in terms of this information and has been fine-tuned with a database of more than 1,500 images. The system has been integrated into a 3D engine for managing virtual characters, allowing the exploration of new forms of natural interaction.
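
    As a rough sketch of the distance-based rule approach described above, the snippet below computes a handful of normalized landmark distances and maps their deviation from a neutral baseline to the six basic emotions plus neutral. The landmark names, the five chosen distances, and every threshold are illustrative assumptions; the paper's actual MPEG-4 feature points, wrinkle detection, and tuned rules are not given in the abstract.

        import math

        # Hypothetical landmark names; the paper uses MPEG-4 facial feature points,
        # but the exact points and distances are not listed in the abstract.
        Point = tuple[float, float]

        def dist(a: Point, b: Point) -> float:
            return math.hypot(a[0] - b[0], a[1] - b[1])

        def facial_distances(p: dict[str, Point]) -> dict[str, float]:
            """Five illustrative distances, normalized by the inter-ocular distance."""
            scale = dist(p["left_eye_center"], p["right_eye_center"])
            return {
                "brow_eye":    dist(p["left_brow_inner"], p["left_eye_top"]) / scale,
                "eye_open":    dist(p["left_eye_top"], p["left_eye_bottom"]) / scale,
                "mouth_width": dist(p["mouth_left"], p["mouth_right"]) / scale,
                "mouth_open":  dist(p["mouth_top"], p["mouth_bottom"]) / scale,
                "corner_eye":  dist(p["mouth_left"], p["left_eye_bottom"]) / scale,
            }

        def classify(current: dict[str, float], neutral: dict[str, float]) -> str:
            """Very rough rule-based mapping to the six basic emotions plus neutral.

            Thresholds are placeholders, not the tuned values from the paper.
            """
            d = {k: current[k] - neutral[k] for k in current}
            if d["mouth_width"] > 0.05 and d["corner_eye"] < -0.02:
                return "happiness"
            if d["mouth_open"] > 0.15 and d["eye_open"] > 0.05:
                return "surprise"
            if d["brow_eye"] < -0.05 and d["mouth_open"] < 0.02:
                return "anger"
            if d["brow_eye"] > 0.05 and d["mouth_width"] < -0.03:
                return "sadness"
            if d["eye_open"] > 0.05 and d["mouth_width"] < -0.02:
                return "fear"
            if d["corner_eye"] > 0.03 and d["mouth_open"] < 0.02:
                return "disgust"
            return "neutral"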

    A novel tangible interaction authoring tool for creating educational activities: analysis of its acceptance by educators

    The creation of applications based on tangible interaction (TI applications), particularly on tabletops, is a developing area that requires the collaboration of professionals with expert knowledge in specific domains. Having an authoring tool that facilitates interdisciplinary intervention in the design and implementation of such applications is a current challenge in bringing TI to different contexts. This article presents an authoring tool (named EDIT) and analyzes its acceptance by educators for creating educational activities. The novelty of the tool lies in the possibility of creating projects with a schedule of educational activities, sequenced as required for a group of students. In addition, it has characteristics specific to the educational scenario, such as the personalization of feedback and the meta-annotation of projects. Sessions were held with educators (n = 38) to analyze variables related to the Technology Acceptance Model (TAM), such as perceived usefulness and perceived ease of use, when creating TI educational activities on tabletops with the EDIT tool. The sessions were observed and recorded on video, and a focus group was held afterwards. During the sessions, educators gave a positive assessment of using this type of tool; in general, they found tangible interaction valuable mostly for working with children. Finally, the TAM results show high acceptance, and the novel features of EDIT were found to be useful.
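
    For context on how acceptance is usually quantified in TAM-based studies (the abstract does not describe the questionnaire or its scoring), here is a minimal sketch that averages hypothetical 7-point Likert items into perceived-usefulness and perceived-ease-of-use scores; the item names and their grouping are assumptions, not the instrument used with EDIT.

        import statistics

        # Hypothetical item-to-construct mapping; the actual questionnaire items
        # used in the EDIT sessions are not provided in the abstract.
        CONSTRUCTS = {
            "perceived_usefulness": ["pu1", "pu2", "pu3", "pu4"],
            "perceived_ease_of_use": ["peou1", "peou2", "peou3", "peou4"],
        }

        def construct_scores(response: dict[str, int]) -> dict[str, float]:
            """Average the 1-7 Likert ratings of each construct's items for one participant."""
            return {
                name: statistics.mean(response[item] for item in items)
                for name, items in CONSTRUCTS.items()
            }

        def summarize(responses: list[dict[str, int]]) -> dict[str, float]:
            """Mean construct score across all participants (e.g., n = 38 educators)."""
            per_person = [construct_scores(r) for r in responses]
            return {
                name: statistics.mean(p[name] for p in per_person)
                for name in CONSTRUCTS
            }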

    Computación afectiva: tecnología y emociones para mejorar la experiencia del usuario

    Affective Computing is currently an emerging research area whose goal is the development of devices and systems capable of recognizing, interpreting, processing and/or simulating human emotions in order to improve the interaction between the user and the computer. These "affective" systems must therefore be able to: 1) capture and recognize the user's emotional states through measurements of signals generated by the face, the voice, the body, or any other reflection of the ongoing emotional process; 2) process that information by classifying, managing, and learning by means of algorithms that collect and compare large numbers of cases, taking into account the emotional states of the user and, where applicable, of the computer; and, finally, 3) generate the corresponding responses and emotions, which can be expressed through different channels: colors, sounds, robots, or virtual characters endowed with facial expressions, gestures, voice, etc. (Facultad de Informática)
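
    The three capabilities listed above suggest a simple capture-process-respond pipeline. The sketch below is a generic illustration of that architecture under assumed interfaces; it is not a system described in the article.

        from dataclasses import dataclass
        from typing import Protocol

        @dataclass
        class EmotionEstimate:
            label: str         # e.g. "joy", "anger", "neutral"
            confidence: float  # 0..1

        class Sensor(Protocol):
            def capture(self) -> bytes: ...          # raw face/voice/body signal

        class Classifier(Protocol):
            def classify(self, signal: bytes) -> EmotionEstimate: ...

        class ResponseChannel(Protocol):
            def render(self, estimate: EmotionEstimate) -> None: ...  # colors, sound, avatar...

        def affective_loop(sensor: Sensor, classifier: Classifier, channel: ResponseChannel) -> None:
            """One iteration of the capture -> process -> respond cycle."""
            signal = sensor.capture()               # 1) capture the user's emotional signal
            estimate = classifier.classify(signal)  # 2) classify / interpret it
            channel.render(estimate)                # 3) express an appropriate response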

    2022 EDUCAUSE Horizon Report. Teaching and Learning Edition: K. Pelletier, M. McCormack, J. Reeves, J. Robert, N. Arbino, et al. Boulder, CO: EDUCAUSE, 2022. 58pp. ISBN: 978-1-933046-13-6

    For many years now, the EDUCAUSE Horizon Report® has issued annual reports outlining the key trends and emerging technologies that offer a glimpse of the current and future state of higher education, analyzing different scenarios and their possible implications. Here we review the most recent report, the 2022 EDUCAUSE Horizon Report | Teaching and Learning Edition, prepared by a panel of 57 higher-education experts from different parts of the world. (Facultad de Informática)

    ENSA dataset: a dataset of songs by non-superstar artists tested with an emotional analysis based on time-series

    This paper presents a novel dataset of songs by non-superstar artists in which a set of musical data is collected, identifying for each song its musical structure and the artist's emotional perception through a categorical emotional labeling process. The generation of this preliminary dataset is motivated by the biases that have been detected in the analysis of the datasets most used in the field of emotion-based music recommendation. The new dataset contains 234 minutes of audio and 60 complete, labeled songs. In addition, an emotional analysis is carried out based on representing dynamic emotional perception as time series: the similarity values generated by the dynamic time warping (DTW) algorithm are analyzed and then used to implement a clustering process with the K-means algorithm. Clustering is also implemented with Uniform Manifold Approximation and Projection (UMAP), a manifold learning and dimension reduction technique, and the HDBSCAN algorithm is applied to determine the optimal number of clusters. The results obtained from the different clustering strategies are compared and, in a preliminary analysis, significant consistency is found between them. With the findings and experimental results obtained, a discussion is presented highlighting the importance of working with complete songs, preferably with a well-defined musical structure, considering the emotional variation that characterizes a song during the listening experience, in which the intensity of the emotion usually changes between verse, bridge, and chorus.
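
    As a rough illustration of the two clustering strategies described above (the exact emotional representation, parameters, and libraries used by the authors are not stated in the abstract), here is a minimal sketch assuming each song is available as a time series of emotion annotations; the library choices (tslearn, umap-learn, hdbscan) are assumptions.

        import numpy as np
        from tslearn.utils import to_time_series_dataset
        from tslearn.metrics import cdist_dtw
        from tslearn.clustering import TimeSeriesKMeans
        import umap      # umap-learn
        import hdbscan

        def cluster_songs(series: list[np.ndarray], n_clusters: int = 4):
            """Cluster per-song emotion time series in two ways (illustrative only)."""
            X = to_time_series_dataset(series)  # pads variable-length series

            # Strategy 1: K-means directly in DTW space.
            km = TimeSeriesKMeans(n_clusters=n_clusters, metric="dtw", random_state=0)
            km_labels = km.fit_predict(X)

            # Strategy 2: embed the pairwise DTW distance matrix with UMAP,
            # then let HDBSCAN decide the number of clusters on the embedding.
            dtw_dist = cdist_dtw(X)  # pairwise DTW distances
            embedding = umap.UMAP(metric="precomputed",
                                  random_state=0).fit_transform(dtw_dist)
            hdb_labels = hdbscan.HDBSCAN(min_cluster_size=5).fit_predict(embedding)

            return km_labels, hdb_labels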

    Avances en el diseño de una herramienta de autor para la creación de actividades educativas basadas en realidad aumentada

    This article presents the progress made in the design of an authoring tool, called AuthorAR, aimed at creating educational activities based on augmented reality (AR). AuthorAR makes it possible to generate exploration activities and phrase-structuring activities, which can support language acquisition and communication training processes, so the possibilities it offers in this respect are discussed. The article presents a description of this authoring tool, a review of related work in the area, and the proposed evolution of the project, together with the first results obtained and the conclusions reached.
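
    To make the notion of a phrase-structuring activity concrete, the sketch below shows one possible data structure an authoring tool could produce for such an activity; the field names, marker identifiers, and feedback messages are illustrative assumptions, not AuthorAR's actual schema.

        from dataclasses import dataclass, field

        @dataclass
        class WordCard:
            text: str       # word shown in the AR overlay
            marker_id: int  # hypothetical marker the word is attached to

        @dataclass
        class PhraseActivity:
            """A hypothetical phrase-structuring activity definition."""
            title: str
            target_phrase: list[str]  # expected word order
            cards: list[WordCard] = field(default_factory=list)
            feedback_correct: str = "¡Muy bien!"      # illustrative feedback messages
            feedback_retry: str = "Intenta de nuevo"

            def check(self, attempt: list[str]) -> str:
                return self.feedback_correct if attempt == self.target_phrase else self.feedback_retry

        # Example: a three-word phrase the student assembles by ordering markers.
        activity = PhraseActivity(
            title="Armar la frase",
            target_phrase=["el", "perro", "corre"],
            cards=[WordCard("el", 1), WordCard("perro", 2), WordCard("corre", 3)],
        )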

    Emotional classification of music using neural networks with the MediaEval dataset

    The proven ability of music to transmit emotions has provoked increasing interest in the development of new algorithms for music emotion recognition (MER). In this work, we present an automatic system for the emotional classification of music implemented with a neural network. This work builds on a previous implementation of a dimensional emotion prediction system in which a multilayer perceptron (MLP) was trained with the freely available MediaEval database. Although those previous results are good in terms of the prediction metrics, they are not good enough to obtain a classification by quadrant based on the valence and arousal values predicted by the neural network, mainly due to the class imbalance in the dataset. To achieve better classification values, a pre-processing phase was implemented to stratify and balance the dataset. Three different classifiers have been compared: linear support vector machine (SVM), random forest, and MLP. The best results are obtained with the MLP, with an averaged F-measure of 50% in a four-quadrant classification scheme. Two binary classification approaches are also presented: a one-vs-rest (OvR) approach over the four quadrants, and binary classifiers for valence and arousal. The OvR approach yields an average F-measure of 69%, and the binary classifiers obtain F-measures of 73% and 69% for valence and arousal, respectively. Finally, a dynamic classification analysis with different time windows was performed using the temporal annotation data of the MediaEval database. The results show that the four-quadrant classification F-measures are practically constant regardless of the duration of the time window. This work also reflects some limitations related to the characteristics of the dataset, including its size, class balance, quality of the annotations, and the available sound features.
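
    The classifier comparison described above can be reproduced in outline with scikit-learn; the feature set, balancing procedure, and hyperparameters used in the paper are not given in the abstract, so the sketch below only illustrates the stratified split, the three models, and the macro F-measure, plus the usual valence/arousal-to-quadrant mapping.

        import numpy as np
        from sklearn.model_selection import train_test_split
        from sklearn.pipeline import make_pipeline
        from sklearn.preprocessing import StandardScaler
        from sklearn.svm import LinearSVC
        from sklearn.ensemble import RandomForestClassifier
        from sklearn.neural_network import MLPClassifier
        from sklearn.metrics import f1_score

        def quadrant(valence: float, arousal: float) -> int:
            """Map continuous valence/arousal (centered at 0) to a quadrant label 1-4."""
            if valence >= 0:
                return 1 if arousal >= 0 else 4
            return 2 if arousal >= 0 else 3

        def compare_classifiers(X: np.ndarray, y: np.ndarray) -> dict[str, float]:
            """Macro F1 of linear SVM, random forest and MLP on quadrant labels.

            X: audio features per excerpt; y: quadrant labels from valence/arousal.
            """
            X_tr, X_te, y_tr, y_te = train_test_split(
                X, y, test_size=0.2, stratify=y, random_state=0
            )
            models = {
                "linear_svm": make_pipeline(StandardScaler(), LinearSVC()),
                "random_forest": RandomForestClassifier(n_estimators=200, random_state=0),
                "mlp": make_pipeline(
                    StandardScaler(),
                    MLPClassifier(hidden_layer_sizes=(64,), max_iter=1000, random_state=0),
                ),
            }
            return {
                name: f1_score(y_te, model.fit(X_tr, y_tr).predict(X_te), average="macro")
                for name, model in models.items()
            }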